Learning efficient and interpretable policies has been a challenging task in reinforcement learning (RL), particularly in the visual RL setting with complex scenes. While neural networks have achieved competitive performance, the resulting policies are often over-parameterized black boxes that are difficult to interpret and deploy efficiently. More recent symbolic RL frameworks have shown that high-level, domain-specific programming logic can be designed to handle both policy learning and symbolic planning. However, these approaches rely on coded primitives with little feature learning, and when applied to high-dimensional visual scenes, they can suffer from scalability issues and perform poorly when images have complex object interactions. To address these challenges, we propose \textit{Differentiable Symbolic Expression Search} (DiffSES), a novel symbolic learning approach that discovers discrete symbolic policies using partially differentiable optimization. By using object-level abstractions instead of raw pixel-level inputs, DiffSES is able to leverage the simplicity and scalability advantages of symbolic expressions, while also incorporating the strengths of neural networks for feature learning and optimization. Our experiments demonstrate that DiffSES is able to generate symbolic policies that are simpler and more scalable than state-of-the-art symbolic RL methods, with a reduced amount of symbolic prior knowledge.
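To make the idea of a symbolic policy over object-level abstractions concrete, here is a hypothetical sketch (our own illustration, not a policy produced by DiffSES): instead of mapping pixels to actions with a network, the policy is a small, readable expression over extracted object coordinates.

```python
def symbolic_policy(agent_x, agent_y, ball_x, ball_y):
    """Hypothetical symbolic policy over object-level features:
    move toward the ball along the axis with the larger gap.
    The if/abs/comparison structure is the kind of compact expression
    tree a symbolic search could discover."""
    dx, dy = ball_x - agent_x, ball_y - agent_y
    if abs(dx) >= abs(dy):
        return "RIGHT" if dx > 0 else "LEFT"
    return "UP" if dy > 0 else "DOWN"

print(symbolic_policy(0.0, 0.0, 3.0, -1.0))  # -> RIGHT
```

Unlike a pixel-based neural policy, every decision here can be read off directly from the expression.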
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice, as well as the bottlenecks faced by the community, in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
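The patch-based training strategy mentioned in the survey can be sketched generically (this is an illustration of the technique, not any particular team's pipeline): a large image is tiled into fixed-size patches that fit in memory, and the model is trained on those patches instead of the full image.

```python
def extract_patches(image, patch_size):
    """Tile a 2D image (a list of rows) into non-overlapping square patches.
    Edge regions that do not fill a whole patch are dropped for simplicity;
    real pipelines typically pad or use overlapping patches instead."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h - patch_size + 1, patch_size):
        for left in range(0, w - patch_size + 1, patch_size):
            patch = [row[left:left + patch_size]
                     for row in image[top:top + patch_size]]
            patches.append(patch)
    return patches

image = [[r * 10 + c for c in range(6)] for r in range(4)]  # a toy 4x6 "image"
patches = extract_patches(image, 2)
print(len(patches))  # -> 6 patches, each 2x2
```

The same idea extends to 3D volumes, where it is often combined with the 2D-slicing workaround the survey also reports.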
In many real-world settings, agents engage in strategic interactions with multiple opposing agents who can employ a wide variety of strategies. The standard approach to designing agents for such settings is to compute or approximate a relevant game-theoretic solution concept, such as Nash equilibrium, and then follow the prescribed strategy. However, such a strategy ignores any observations of opponents' play, which may indicate shortcomings that can be exploited. We present an approach for opponent modeling in multiplayer imperfect-information games in which we collect observations of opponents' play through repeated interactions. We run experiments against a wide variety of real opponents and exact Nash equilibrium strategies in three-player Kuhn poker and show that our algorithm significantly outperforms all of these agents, including the exact Nash equilibrium strategies.
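One simple form of the observation-driven modeling described above (a generic frequency estimator, not the paper's exact algorithm) maintains counts of the opponent's actions at each decision point and smooths them with a uniform prior, yielding a strategy estimate that can then be best-responded to.

```python
from collections import defaultdict

class FrequencyOpponentModel:
    """Estimate an opponent's strategy from observed play: count actions
    at each information set and smooth with a uniform Dirichlet prior
    so unseen information sets default to a uniform strategy."""
    def __init__(self, actions, prior=1.0):
        self.actions = actions
        self.prior = prior
        self.counts = defaultdict(lambda: {a: 0 for a in actions})

    def observe(self, infoset, action):
        self.counts[infoset][action] += 1

    def strategy(self, infoset):
        c = self.counts[infoset]
        total = sum(c.values()) + self.prior * len(self.actions)
        return {a: (c[a] + self.prior) / total for a in self.actions}

model = FrequencyOpponentModel(["bet", "check"])
for _ in range(8):
    model.observe("K", "bet")   # opponent bets 8 times holding a king
model.observe("K", "check")     # ... and checks once
print(model.strategy("K"))      # bet probability well above 0.5
```

As more observations accumulate, the prior washes out and the estimate tracks the opponent's empirical frequencies, exposing deviations from equilibrium play.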
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g., geometry, physiology, physics) of the medical data being processed. This work introduces MONAI, a freely available, community-supported, consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams around the world, who are pursuing applications spanning nearly every aspect of healthcare.
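The compositional, additive style the abstract attributes to MONAI can be illustrated schematically in plain Python (these `Compose`, `scale_intensity`, and `clip` helpers are illustrative stand-ins, not MONAI's actual API): small transforms are chained into a preprocessing pipeline, mirroring the PyTorch/torchvision convention.

```python
class Compose:
    """Chain transforms in the torchvision/MONAI style: each transform is a
    callable applied to the output of the previous one."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, data):
        for t in self.transforms:
            data = t(data)
        return data

def scale_intensity(lo=0.0, hi=1.0):
    """Min-max scale a flat list of voxel intensities into [lo, hi]."""
    def _apply(voxels):
        mn, mx = min(voxels), max(voxels)
        span = (mx - mn) or 1.0        # avoid division by zero on flat input
        return [lo + (v - mn) * (hi - lo) / span for v in voxels]
    return _apply

def clip(lo, hi):
    """Clamp intensities, e.g. to a plausible physical range."""
    return lambda voxels: [min(max(v, lo), hi) for v in voxels]

pipeline = Compose([clip(0, 500), scale_intensity()])
print(pipeline([-100, 0, 250, 1000]))  # -> [0.0, 0.0, 0.5, 1.0]
```

The design choice is that each transform is independently testable and reusable, which is what makes such pipelines easy to document and maintain.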
Although ground-based telescope surveys have discovered many near-Earth objects, some fast-moving objects are missed by observations, especially those near the detection limit. We developed a convolutional neural network for detecting faint fast-moving near-Earth objects. It was trained with artificial streaks generated from simulations and is able to find these asteroid streaks with an accuracy of 98.7% and a false-positive rate of 0.02% on simulated data. The program was used to search image data from the Zwicky Transient Facility (ZTF) on four nights in 2019 and identified six previously undiscovered asteroids. Our detections have visual magnitudes of ~19.0-20.3 and motion rates of ~6.8-24 deg/day, which is very faint compared with other ZTF detections. Our asteroids are also estimated to be ~1-51 m in size and, at close approach, to lie at distances of ~5-60 lunar distances, assuming their albedos follow the albedo distribution function of known asteroids. Training our model with a purely simulated dataset enables the program to gain sensitivity in detecting faint and fast-moving objects while still recovering nearly all discoveries made by neural networks trained with real detections. Our approach can be adopted by any observer for detecting fast-moving asteroid streaks.
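The simulation-driven training described above can be sketched with a toy generator (our own illustration, not the paper's streak model): render a bright linear streak over Gaussian background noise to produce a labeled training example, so the network never needs real detections.

```python
import math
import random

def make_streak_image(size=32, noise=1.0, seed=0):
    """Render a toy asteroid streak (a bright line segment) over Gaussian
    background noise. Returns the image and a binary mask labeling the
    streak pixels -- a synthetic (input, label) training pair."""
    rng = random.Random(seed)
    img = [[rng.gauss(0.0, noise) for _ in range(size)] for _ in range(size)]
    mask = [[0] * size for _ in range(size)]
    x0, y0 = rng.randrange(size), rng.randrange(size)
    angle = rng.uniform(0.0, math.pi)          # streak orientation
    length = rng.randrange(size // 4, size // 2)
    for step in range(length):
        x = int(x0 + step * math.cos(angle)) % size
        y = int(y0 + step * math.sin(angle)) % size
        img[y][x] += 5.0 * noise               # flux well above the noise floor
        mask[y][x] = 1
    return img, mask

img, mask = make_streak_image()
print(sum(map(sum, mask)))  # number of labeled streak pixels
```

Varying the streak brightness, rate, and orientation over many such samples is what lets a detector generalize to faint real streaks.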
A great deal of research has been devoted to graph drawing, but many existing methods focus only on optimizing specific aesthetic aspects of a graph layout. Given a graph, generating a good layout that satisfies certain human aesthetic preferences remains a challenging task, especially if such preferences cannot be expressed as a differentiable objective function. In this paper, we propose SmartGD, a GAN-based graph drawing framework with a student-teacher architecture, which learns to draw graphs just as humans learn to perform tasks. The student network in SmartGD learns graph drawing by imitating good layout examples, while the teacher network is responsible for scoring the goodness of the generated layouts. When concrete aesthetic criteria for what constitutes a good layout are lacking, the student network can learn from good layout examples. On the other hand, if the goodness of a layout can be assessed by quantitative criteria (even non-differentiable ones), the student network can use them as concrete goals for optimizing the target aesthetics. To achieve both goals, we propose a novel GAN variant, the self-challenging GAN, to learn the optimal layout distribution with respect to any aesthetic criterion, whether or not that criterion is differentiable. The proposed framework can thus not only draw graphs in a style similar to that of good layout examples but also optimize graph layouts according to any given aesthetic criterion. Once the model is trained, it can visualize arbitrary graphs in the style of the example layouts or according to the chosen aesthetic criteria. Comprehensive experimental studies show that SmartGD outperforms 12 benchmark methods according to commonly agreed-upon metrics.
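The key point that a layout can be optimized against any quantitative criterion, differentiable or not, can be illustrated with a much simpler baseline than SmartGD's GAN (this is only a sketch of the idea): randomly perturb node positions and keep changes that improve a non-differentiable aesthetic score, here the variance of edge lengths.

```python
import random

def edge_length_variance(pos, edges):
    """Aesthetic score: variance of edge lengths. Uniform edge lengths are
    a commonly used readability criterion; the score need not be
    differentiable for this optimizer to work."""
    lengths = [((pos[u][0] - pos[v][0]) ** 2 +
                (pos[u][1] - pos[v][1]) ** 2) ** 0.5 for u, v in edges]
    mean = sum(lengths) / len(lengths)
    return sum((l - mean) ** 2 for l in lengths) / len(lengths)

def improve_layout(pos, edges, steps=2000, sigma=0.1, seed=0):
    """Hill-climbing over node positions: accept a random perturbation
    only if it lowers the (black-box) score."""
    rng = random.Random(seed)
    best = edge_length_variance(pos, edges)
    nodes = list(pos)
    for _ in range(steps):
        n = rng.choice(nodes)
        old = pos[n]
        pos[n] = (old[0] + rng.gauss(0, sigma), old[1] + rng.gauss(0, sigma))
        score = edge_length_variance(pos, edges)
        if score < best:
            best = score          # keep the improving move
        else:
            pos[n] = old          # revert
    return pos, best

pos = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.5, 2.0)}
edges = [(0, 1), (1, 2), (2, 0)]
pos, score = improve_layout(pos, edges)
print(round(score, 4))  # variance shrinks toward 0 (near-equilateral triangle)
```

SmartGD replaces this crude search with a learned generator, but the contract is the same: the criterion only needs to return a score, not a gradient.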
Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
In this paper, we design and train a Generative Image-to-text Transformer, GIT, to unify vision-language tasks such as image/video captioning and question answering. While generative models provide a consistent network architecture between pre-training and fine-tuning, existing work typically contains complex structures (uni/multi-modal encoders/decoders) and depends on external modules such as object detectors/taggers and optical character recognition (OCR). In GIT, we simplify the architecture to one image encoder and one text decoder under a single language modeling task. We also scale up the pre-training data and the model size to boost model performance. Without bells and whistles, our GIT establishes new state of the art on 12 challenging benchmarks. For instance, our model surpasses human performance for the first time on TextCaps (138.2 vs. 125.5 in CIDEr). Furthermore, we present a new scheme of generation-based image classification and scene text recognition, achieving decent performance on standard benchmarks.
A wavelet-based algorithm to improve speech intelligibility, along with a full dataset and results, is reported. Discrete speech signals are decomposed into frequency sub-bands through a multi-level discrete wavelet transform. Various gains are applied to the sub-band signals before they are recombined to form a modified version of the speech. The sub-band gains are adjusted while keeping the overall signal energy unchanged, and speech intelligibility under various background-interference and simulated hearing-loss conditions is evaluated objectively and quantitatively using Google Speech-to-Text transcription. A universal set of sub-band gains works across noise-to-signal ratios of up to 4.8 dB. For noise-free speech, overall intelligibility is improved by reallocating spectral energy toward the mid-frequency bands, raising Google transcription accuracy by 16.9 percentage points on average and 86.7 percentage points at maximum. For speech already corrupted by noise, improving intelligibility is challenging but still achievable, with transcription accuracy improved by 9.5 percentage points on average and 71.4 at maximum. The proposed algorithm is suitable for real-time speech processing and is simpler than previous algorithms. Potential applications include speech enhancement, hearing aids, machine listening, and a better understanding of speech intelligibility.
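The gain-adjust-and-recombine procedure can be sketched with a single-level Haar wavelet (a minimal stand-in for the multi-level transform and tuned gains in the abstract): decompose the signal into low/high sub-bands, apply gains, reconstruct, and rescale so the total energy is unchanged.

```python
import math

def haar_forward(x):
    """One-level orthonormal Haar DWT of an even-length signal."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Inverse of haar_forward."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / s, (a - d) / s]
    return x

def reweight_subbands(x, gain_low, gain_high):
    """Apply sub-band gains, then rescale the output so total signal
    energy matches the input, as in the abstract's constraint."""
    approx, detail = haar_forward(x)
    approx = [gain_low * a for a in approx]
    detail = [gain_high * d for d in detail]
    y = haar_inverse(approx, detail)
    e_in = sum(v * v for v in x)
    e_out = sum(v * v for v in y) or 1.0
    k = math.sqrt(e_in / e_out)
    return [k * v for v in y]

y = reweight_subbands([1.0, 2.0, 3.0, 2.0], gain_low=1.0, gain_high=2.0)
print(round(sum(v * v for v in y), 6))  # -> 18.0, same energy as the input
```

A full implementation would recurse the transform over several levels and tune one gain per band, but the energy-preserving reweighting step is exactly this.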
We demonstrate that a neural network pre-trained on text and fine-tuned on code solves mathematics problems by program synthesis. We turn questions into programming tasks, automatically generate programs, and then execute them, solving university-level problems from MIT's large mathematics courses (Single Variable Calculus 18.01, Multivariable Calculus 18.02, Differential Equations 18.03, Introduction to Probability and Statistics 18.05, Linear Algebra 18.06, and Mathematics for Computer Science 6.042) as well as questions from the MATH dataset (on Prealgebra, Algebra, Counting and Probability, Number Theory, and Precalculus), the latest benchmark of advanced mathematics problems specifically designed to assess mathematical reasoning. We explore prompt-generation methods that enable transformers to generate problem-solving programs for these topics, including solutions with plots. We generate correct answers for a random sample of questions in each topic. We quantify the gap between the original and transformed questions and conduct a survey to evaluate the quality and difficulty of the generated questions. This is the first work to automatically solve, grade, and generate university-level mathematics course questions at scale, which represents a milestone for higher education.
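The question-to-program approach can be illustrated with a toy example (our own, not drawn from the paper's courses): the counting-and-probability question "what is the probability of exactly two heads in three fair coin flips?" becomes a short enumerative program whose execution yields the exact answer.

```python
from fractions import Fraction
from itertools import product

def prob_exactly_k_heads(n_flips, k):
    """Enumerate all coin-flip outcomes and return the exact probability
    of seeing exactly k heads -- the kind of program a code-tuned model
    might emit for a counting-and-probability question."""
    outcomes = list(product("HT", repeat=n_flips))
    hits = sum(1 for o in outcomes if o.count("H") == k)
    return Fraction(hits, len(outcomes))

print(prob_exactly_k_heads(3, 2))  # -> 3/8
```

Executing a program rather than generating the answer text directly is what makes the result exact and checkable, which is central to the paper's grading pipeline.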